Optically thin composite resonant absorber at the near-infrared band: a polarization independent and spectrally broadband configuration
We designed, fabricated, and experimentally characterized thin absorbers utilizing both electrical and magnetic impedance matching in the near-infrared regime. The absorbers consist of four main layers: a metal back plate, a dielectric spacer, and two artificial layers. One of the artificial layers provides electrical resonance and the other provides magnetic resonance, yielding polarization-independent broadband perfect absorption. The structure's response remains similar over a wide range of incidence angles due to the sub-wavelength unit-cell size of the constituent artificial layers. The design is useful for applications such as thermal photovoltaics, sensors, and camouflage.
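As background for the impedance-matching claim above: the reflection-suppression condition is a standard textbook relation and is not spelled out in the abstract itself. A minimal summary, assuming effective-medium behaviour of the two artificial layers:

```latex
% Reflection at normal incidence from a structure with effective impedance Z(\omega),
% where Z_0 is the free-space impedance:
\[
  r(\omega)=\frac{Z(\omega)-Z_0}{Z(\omega)+Z_0},\qquad
  Z(\omega)=Z_0\sqrt{\frac{\mu_{\mathrm{eff}}(\omega)}{\epsilon_{\mathrm{eff}}(\omega)}}
\]
% Matching the electric (\epsilon_eff) and magnetic (\mu_eff) responses gives Z = Z_0,
% hence r = 0; the metal back plate blocks transmission (T = 0), so absorption is total:
\[
  \mu_{\mathrm{eff}}=\epsilon_{\mathrm{eff}}\;\Rightarrow\; r=0,\qquad
  A(\omega)=1-|r|^2-|T|^2=1
\]
```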
Results from an ethnographically-informed study in the context of test driven development
Background: Test-driven development (TDD) is an iterative software development technique where unit tests are defined before production code. Previous studies fail to analyze the values, beliefs, and assumptions that inform and shape TDD. Aim: We designed and conducted a qualitative study to understand the values, beliefs, and assumptions of TDD. In particular, we sought to understand how novice and professional software developers, arranged in pairs (a driver and a pointer), perceive and apply TDD. Method: 14 novice software developers, i.e., graduate students in Computer Science at the University of Basilicata, and six professional software developers (with one to ten years of work experience) participated in our ethnographically-informed study. We asked the participants to implement a new feature for an existing software system written in Java. We immersed ourselves in the context of the study, and collected data by means of contemporaneous field notes, audio recordings, and other artifacts. Results: A number of insights emerge from our analysis of the collected data, the main ones being: (i) refactoring (one of the phases of TDD) is not performed as often as the process requires and is considered less important than other phases, (ii) the most important phase is implementation, (iii) unit tests are almost never up-to-date, (iv) participants first build a sort of mental model of the source code to be implemented and only then write test cases on the basis of this model; and (v) apart from minor differences, professional developers and students applied TDD in a similar fashion. Conclusions: Developers write quick-and-dirty production code to pass the tests and ignore refactoring.
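For readers unfamiliar with the TDD cycle the abstract presumes (test first, then production code, then refactoring), a minimal illustration may help. This sketch is ours, not taken from the study, and the `ShoppingCart` example is invented for illustration:

```python
import unittest

# TDD step 1 (red): write the unit test before any production code exists.
class TestShoppingCart(unittest.TestCase):
    def test_total_sums_item_prices(self):
        cart = ShoppingCart()          # fails at first: ShoppingCart not yet written
        cart.add("apple", 1.50)
        cart.add("bread", 2.00)
        self.assertAlmostEqual(cart.total(), 3.50)

# TDD step 2 (green): write just enough production code to pass the test.
class ShoppingCart:
    def __init__(self):
        self._prices = []

    def add(self, name, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

# TDD step 3 (refactor): clean up while keeping the test green -- the phase
# the study found participants skip most often.

if __name__ == "__main__":
    unittest.main()
```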
A benchmark study on the effectiveness of search-based data selection and feature selection for cross project defect prediction
Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP). On the other hand, feature selection and data quality are issues to consider in CPDP. Objective: We aim at utilizing the Nearest Neighbor (NN)-Filter, embedded in a genetic algorithm, to produce validation sets for generating evolving training datasets to tackle CPDP while accounting for potential noise in defect labels. We also investigate the impact of using different feature sets. Method: We extend our proposed approach, Genetic Instance Selection (GIS), by incorporating feature selection in its setting. We use 41 releases of 11 multi-version projects to assess the performance of GIS in comparison with benchmark CPDP approaches (NN-filter and Naive-CPDP) and within-project benchmarks (Cross-Validation (CV) and Previous Releases (PR)). To assess the impact of feature sets, we use two sets of features, SCM+OO+LOC (all) and CK+LOC (ckloc), as well as iterative info-gain subsetting (IG) for feature selection. Results: The GIS variant with info-gain feature selection is significantly better than NN-Filter (all, ckloc, IG) in terms of F1 (p-values ≪ 0.001, Cohen's d = {0.621, 0.845, 0.762}) and G (p-values ≪ 0.001, Cohen's d = {0.899, 1.114, 1.056}), and than Naive CPDP (all, ckloc, IG) in terms of F1 (p-values ≪ 0.001, Cohen's d = {0.743, 0.865, 0.789}) and G (p-values ≪ 0.001, Cohen's d = {1.027, 1.119, 1.050}). Overall, the performance of GIS is comparable to that of the within-project defect prediction (WPDP) benchmarks, i.e. CV and PR. In terms of the multiple-comparisons test, all variants of GIS belong to the top-ranking group of approaches. Conclusions: We conclude that datasets obtained from search-based approaches combined with feature selection techniques are a promising way to tackle CPDP. In particular, the performance comparison with the within-project scenario encourages further investigation of our approach. However, the performance of GIS rests on high recall at the expense of a loss in precision. Using different optimization goals, utilizing other validation datasets and other fea…
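The abstract does not detail the "iterative info-gain subsetting (IG)" step. A plausible reading is ranking features by information gain (mutual information with the defect label) and iteratively growing a top-k prefix, keeping the subset that scores best on a validation set. The sketch below reflects that assumption using scikit-learn; the function name, the choice of classifier, and the selection loop are ours, not the authors':

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def info_gain_subset(X, y, X_val, y_val):
    """Rank features by information gain, then iteratively grow the subset,
    keeping the prefix that maximizes F1 on a validation set. One plausible
    reading of 'iterative info-gain subsetting'; the paper may differ."""
    gains = mutual_info_classif(X, y, random_state=0)
    order = np.argsort(gains)[::-1]              # best-scoring feature first
    best_f1, best_subset = -1.0, order[:1]
    for k in range(1, len(order) + 1):
        subset = order[:k]
        clf = LogisticRegression(max_iter=1000).fit(X[:, subset], y)
        f1 = f1_score(y_val, clf.predict(X_val[:, subset]))
        if f1 > best_f1:
            best_f1, best_subset = f1, subset
    return best_subset, best_f1
```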
Effect of Time-pressure on Perceived and Actual Performance in Functional Software Testing
Background: Time-pressure is an inevitable reality of the software industry that influences the performance of software engineers. It may result in adverse effects on software quality, or distort the perception of performance on executed tasks so that it differs from actual performance. Objective: We aim to investigate the effect of time-pressure on the perceived and actual performance of software testers in the context of functional software testing. Method: We performed two controlled experiments with 87 graduate students in two academic terms. We assessed actual performance in terms of coverage (i.e. the percentage of test cases correctly identified) and perceived performance using NASA-TLX. We used an independent factorial design for our experimental study. Results: The results reveal a significant effect of time-pressure on actual performance. However, we could not observe a significant effect of time-pressure on the perceived performance of the participants for the task undertaken. We also observed a significant negative correlation between actual and perceived performance when controlling for the time-pressure and experimental-session factors. Conclusion: Time-pressure affects actual performance in a testing task, but the testers' perception of accomplishment is sustained irrespective of time-pressure, indicating an over-estimation issue. Perception of performance should be adjusted to align with reality to account for the effect of time-pressure; this will lead to better self-estimates of performance.
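The "correlation when controlling for time-pressure and session" result corresponds to a partial correlation. A minimal sketch of how such a statistic can be computed (regress both variables on the control factors, then correlate the residuals) is shown below; the data and variable names are invented placeholders, not the study's:

```python
import numpy as np
from scipy import stats

def partial_correlation(x, y, controls):
    """Pearson correlation between x and y after removing the linear effect
    of the control variables (residual-on-residual correlation)."""
    Z = np.column_stack([np.ones(len(x)), controls])   # design matrix with intercept
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residuals of x given Z
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residuals of y given Z
    return stats.pearsonr(rx, ry)

# Hypothetical usage with invented data (not the experiments' data):
rng = np.random.default_rng(0)
actual = rng.random(87)                            # e.g. coverage scores
perceived = 1 - actual + rng.normal(0, 0.2, 87)    # NASA-TLX-style rating
controls = np.column_stack([
    rng.integers(0, 2, 87),                        # time-pressure condition (0/1)
    rng.integers(0, 2, 87),                        # experimental session (0/1)
])
r, p = partial_correlation(actual, perceived, controls)
print(f"partial r = {r:.3f}, p = {p:.3f}")
```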
Ground truth deficiencies in software engineering: when codifying the past can be counterproductive
Many software engineering tools build and evaluate their models based on historical data to support development and process decisions. These models help us answer numerous interesting questions, but have their own caveats. In a real-life setting, the objective function of human decision-makers for a given task might be influenced by a whole host of factors that stem from their cognitive biases, subverting the ideal objective function required for an optimally functioning system. Relying on this data as ground truth may give rise to systems that end up automating software engineering decisions by mimicking past sub-optimal behaviour. We illustrate this phenomenon and suggest mitigation strategies to raise awareness.
An external replication on the effects of test-driven development using a multi-site blind analysis approach
Context: Test-driven development (TDD) is an agile practice claimed to improve the quality of a software product, as well as the productivity of its developers. A previous study (i.e., baseline experiment) at the University of Oulu (Finland) compared TDD to a test-last development (TLD) approach through a randomized controlled trial. The results failed to support the claims. Goal: We want to validate the original study's results by replicating it at the University of Basilicata (Italy), using a different design. Method: We replicated the baseline experiment, using a crossover design, with 21 graduate students. We kept the settings and context as close as possible to the baseline experiment. In order to limit researcher bias, we involved two other sites (UPM, Spain, and Brunel, UK) to conduct a blind analysis of the data. Results: The Kruskal-Wallis tests did not show any significant difference between TDD and TLD in terms of testing effort (p-value = .27), external code quality (p-value = .82), and developers' productivity (p-value = .83). Nevertheless, our data revealed a difference based on the order in which TDD and TLD were applied, though no carry-over effect. Conclusions: We confirm the baseline study's results, yet our results raise concerns regarding the selection of experimental objects, particularly with respect to their interaction with the order in which treatments are applied. We recommend that future studies survey the tasks used in experiments evaluating TDD. Finally, to lower the cost of replication studies and reduce researcher bias, we encourage other research groups to adopt the multi-site blind-analysis approach described in this paper. This research is supported in part by the Academy of Finland Project 278354.
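The between-treatment comparisons above use the Kruskal-Wallis test, a non-parametric alternative to one-way ANOVA that makes no normality assumption. A minimal sketch with scipy; the score arrays are invented placeholders, not the experiment's data:

```python
from scipy import stats

# Hypothetical external-code-quality scores per treatment group
# (placeholder numbers, not the study's measurements).
tdd_scores = [62, 71, 55, 68, 74, 59, 66]
tld_scores = [64, 70, 58, 61, 73, 60, 69]

# Kruskal-Wallis H-test: ranks all observations jointly and compares
# mean ranks across groups.
h_stat, p_value = stats.kruskal(tdd_scores, tld_scores)
print(f"H = {h_stat:.2f}, p = {p_value:.2f}")  # p > .05 -> no significant difference
```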
Search based training data selection for cross project defect prediction
Context: Previous studies have shown that steered training data or dataset selection can lead to better performance for cross project defect prediction (CPDP). On the other hand, data quality is an issue to consider in CPDP. Aim: We aim at utilising the Nearest Neighbor (NN)-Filter, embedded in a genetic algorithm, for generating evolving training datasets to tackle CPDP, while accounting for potential noise in defect labels. Method: We propose a new search based training data (i.e., instance) selection approach for CPDP called GIS (Genetic Instance Selection) that looks for solutions to optimize a combined measure of F-Measure and GMean, on a validation set generated by the (NN)-filter. The genetic operations consider the similarities in features and address possible noise in assigned defect labels. We use 13 datasets from the PROMISE repository in order to compare the performance of GIS with benchmark CPDP methods, namely (NN)-filter and naive CPDP, as well as with within project defect prediction (WPDP). Results: Our results show that GIS is significantly better than (NN)-Filter in terms of F-Measure (p-value ≪ 0.001, Cohen's d = 0.697) and GMean (p-value ≪ 0.001, Cohen's d = 0.946). It also outperforms the naive CPDP approach in terms of F-Measure (p-value ≪ 0.001, Cohen's d = 0.753) and GMean (p-value ≪ 0.001, Cohen's d = 0.994). In addition, the performance of our approach is better than that of WPDP, again considering F-Measure (p-value ≪ 0.001, Cohen's d = 0.227) and GMean (p-value ≪ 0.001, Cohen's d = 0.595) values. Conclusions: We conclude that search based instance selection is a promising way to tackle CPDP. In particular, the performance comparison with the within project scenario encourages further investigation of our approach. However, the performance of GIS rests on high recall at the expense of low precision. Using different optimization goals, e.g. targeting high precision, would be a future direction to investigate.
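GIS optimizes "a combined measure of F-Measure and GMean" on the NN-filter validation set; the abstract does not give the exact combination, so the sketch below assumes an unweighted average of the two. The function names and the GMean definition (geometric mean of recall and specificity, the common choice in defect prediction) are our assumptions:

```python
import math

def g_mean(tp, fp, tn, fn):
    """Geometric mean of recall and specificity; a common GMean definition
    in defect prediction, assumed here since the abstract does not specify."""
    recall = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return math.sqrt(recall * specificity)

def f_measure(tp, fp, tn, fn):
    """F1: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def gis_fitness(tp, fp, tn, fn):
    """Assumed combined objective: unweighted mean of F1 and GMean,
    evaluated on the NN-filter validation set. The paper may weight
    or combine the two measures differently."""
    return 0.5 * (f_measure(tp, fp, tn, fn) + g_mean(tp, fp, tn, fn))
```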
A Systematic Literature Review and Meta-analysis on Cross Project Defect Prediction
Background: Cross project defect prediction (CPDP) recently gained considerable attention, yet there are no systematic efforts to analyse existing empirical evidence. Objective: To synthesise the literature to understand the state-of-the-art in CPDP with respect to metrics, models, data approaches, datasets and associated performances. Further, we aim to assess the performance of CPDP vs. within project DP models. Method: We conducted a systematic literature review. Results from primary studies are synthesised (thematic analysis, meta-analysis) to answer the research questions. Results: We identified 30 primary studies passing quality assessment. Performance measures, except precision, vary with the choice of metrics. Recall, precision, f-measure, and AUC are the most common measures. Models based on Nearest-Neighbour and Decision Tree tend to perform well in CPDP, whereas the popular naïve Bayes yields average performance. Performance of ensembles varies greatly across f-measure and AUC. Data approaches address CPDP challenges using row/column processing, which improves CPDP in terms of recall at the cost of precision. This is observed on multiple occasions, including the meta-analysis of CPDP vs. WPDP. NASA and Jureczko datasets seem to favour CPDP over WPDP more frequently. Conclusion: CPDP is still a challenge and requires more research before trustworthy applications can take place. We provide guidelines for further research.
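For readers unfamiliar with the meta-analysis step (pooling CPDP-vs-WPDP effects across primary studies), a minimal random-effects sketch using the standard DerSimonian-Laird estimator is below. The abstract does not name the pooling method, so this choice is our assumption, and the effect sizes are invented placeholders:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird
    between-study variance (tau^2)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)             # fixed-effect mean
    q = np.sum(w * (y - y_fe) ** 2)              # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # between-study variance
    w_re = 1.0 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical per-study effects (e.g. CPDP minus WPDP recall) and variances:
pooled, se, tau2 = dersimonian_laird([0.10, 0.05, 0.12, -0.02],
                                     [0.004, 0.006, 0.005, 0.008])
print(f"pooled = {pooled:.3f} ± {1.96 * se:.3f} (tau^2 = {tau2:.4f})")
```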
Providing tool-support for value-based decision-making: A usability assessment
Numerous companies worldwide make their
decisions related to software projects/products in a value neutral
way, using only earned value systems, which represent short-term
goals. Better decisions can be made using a value-based approach,
achieving cost-effective results and reliable construction and
maintenance of products. However, moving from a value-neutral
to a value-based paradigm can be a challenge. We provide toolsupport,
which was co-created in collaboration with three
software companies, to ease the paradigm shift. Our tool supports
both individual and group-based decisions using several
visualization mechanisms. Despite the co-creation process
employed while developing the value tool, there are specific issues
relating to its usability that must also be assessed in order to
reduce any possible drawbacks for its adoption by industry. This
paper details three usability studies that were carried out to assess
the value tool’s usability. The results also suggest that the tool is
ready to be taken into use in the industry.The research project is funded by the Finnish Funding
Agency for Technology and Innovation (Tekes), under the
FiDiPro VALUE project